Search for: All records
Total Resources: 4
- Abstract: State-of-the-art deep-learning systems use decision rules that are challenging for humans to model. Explainable AI (XAI) attempts to improve human understanding but rarely accounts for how people typically reason about unfamiliar agents. We propose explicitly modelling the human explainee via Bayesian teaching, which evaluates explanations by how much they shift explainees' inferences toward a desired goal. We assess Bayesian teaching in a binary image classification task across a variety of contexts. Absent intervention, participants predict that the AI's classifications will match their own, but explanations generated by Bayesian teaching improve their ability to predict the AI's judgements by moving them away from this prior belief. Bayesian teaching further allows each case to be broken down into sub-examples (here, saliency maps). These sub-examples complement whole examples by improving error detection for familiar categories, whereas whole examples help predict correct AI judgements of unfamiliar cases.
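The abstract above describes Bayesian teaching as scoring explanations by how far they move a modelled explainee's inference toward a desired goal. A minimal sketch of that objective follows; the hypothesis names, example pool, and likelihood numbers are all illustrative assumptions, not values from the paper.

```python
from itertools import combinations
from math import prod

def posterior(prior, likelihood, examples):
    """Bayes update of the modelled explainee over hypotheses,
    treating the shown examples as conditionally independent."""
    unnorm = {h: p * prod(likelihood[h][e] for e in examples)
              for h, p in prior.items()}
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

def best_explanation(prior, likelihood, pool, target, k=2):
    """Pick the size-k subset of candidate examples that maximizes the
    explainee's posterior belief in the target hypothesis (the teaching goal)."""
    return max(combinations(pool, k),
               key=lambda ex: posterior(prior, likelihood, ex)[target])

# Toy setup mirroring the abstract's finding: absent intervention, the
# explainee's prior favors "the AI classifies like me".
prior = {"ai_matches_me": 0.8, "ai_differs": 0.2}
likelihood = {
    "ai_matches_me": {"e1": 0.1, "e2": 0.5, "e3": 0.4},
    "ai_differs":    {"e1": 0.7, "e2": 0.4, "e3": 0.6},
}
expl = best_explanation(prior, likelihood, ["e1", "e2", "e3"],
                        target="ai_differs", k=2)
```

Under these made-up numbers, the selected pair is the one whose joint likelihood most favors the target hypothesis, pulling the explainee away from the 0.8 prior on "the AI matches me".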
- Folke, Tomas; Yang, Scott Cheng-Hsin; Anderson, Sean; Shafto, Patrick. Proceedings of SPIE. Pham, Tien; Solomon, Latasha; Hohil, Myron E. (Eds.)
- Yang, Scott Cheng-Hsin; Folke, Tomas; Shafto, Patrick. Applied AI Letters.
- Yang, Scott Cheng-Hsin; Vong, Wai Keen; Yu, Yue; Shafto, Patrick. Topics in Cognitive Science.